Session D-3

Backscatter and RF

Conference: 10:00 AM — 11:30 AM EDT
Local: May 12 Wed, 7:00 AM — 8:30 AM PDT

RapidRider: Efficient WiFi Backscatter with Uncontrolled Ambient Signals

Qiwei Wang (University of Science and Technology of China, China); Si Chen and Jia Zhao (Simon Fraser University, Canada); Wei Gong (University of Science and Technology of China, China)

This paper presents RapidRider, the first WiFi backscatter system that takes uncontrolled OFDM WiFi signals, e.g., 802.11a/g/n, as excitations and efficiently embeds tag data at the single-symbol rate. Such a design brings us closer to the dream of pervasive backscatter communication, since uncontrolled WiFi signals are everywhere. Specifically, we show that RapidRider can demodulate tag data from each OFDM symbol, whereas previous systems rely on multi-symbol demodulation. Further, we design deinterleaving-twins decoding, which enables RapidRider to use any uncontrolled WiFi signals as carriers. We prototype RapidRider using FPGAs, commodity radios, and USRPs. Comprehensive evaluations show that RapidRider's maximum throughput is 3.92x and 1.97x higher than that of FreeRider and MOXcatter, respectively. To accommodate cases where only one receiver is available, we design RapidRider+, which carries both productive data and tag data in the same packet. Results demonstrate that it achieves an aggregate goodput of productive and tag data of around 1 Mbps on average.
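As a rough illustration of the per-symbol idea (not RapidRider's actual deinterleaving-twins decoding), the sketch below models a tag that toggles the reflection phase of each ambient OFDM symbol and a receiver that recovers one tag bit per symbol from the phase difference; the signal model and variable names are illustrative assumptions.

```python
# Simplified per-symbol tag embedding on OFDM symbols (illustrative only;
# not RapidRider's deinterleaving-twins decoder).
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 64
# Stand-in for uncontrolled ambient WiFi OFDM symbols (one complex sample per symbol).
ambient = np.exp(1j * rng.uniform(0, 2 * np.pi, n_symbols))

# The tag embeds one bit per OFDM symbol by toggling the reflection phase (0 or pi).
tag_bits = rng.integers(0, 2, n_symbols)
backscattered = ambient * np.exp(1j * np.pi * tag_bits)

# A receiver that also observes (or decodes) the original ambient stream
# recovers each tag bit from the per-symbol phase difference.
phase_diff = np.angle(backscattered * np.conj(ambient))
recovered = (np.abs(phase_diff) > np.pi / 2).astype(int)

assert np.array_equal(recovered, tag_bits)
print("per-symbol tag bits recovered:", recovered[:8])
```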

Turbocharging Deep Backscatter Through Constructive Power Surges with a Single RF Source

Zhenlin An, Qiongzheng Lin and Qingrui Pan (The Hong Kong Polytechnic University, Hong Kong); Lei Yang (The Hong Kong Polytechnic University, China)

Backscatter networks are becoming a promising solution for embedded sensing. In these networks, backscatter sensors are deeply implanted inside objects or living beings and form a deep backscatter network (DBN). The fundamental challenges in DBNs are the significant attenuation of the wireless signal caused by environmental materials (e.g., water and bodily tissues) and the miniature antennas of the implantable backscatter sensors, which prevent existing backscatter networks from powering sensors beyond superficial depths. This study presents RiCharge, a turbocharging solution that enables powering up and communicating with DBNs through a single augmented RF source, allowing existing backscatter sensors to serve DBNs at zero startup cost. The key contribution of RiCharge is a turbocharging algorithm that utilizes RF surges to induce constructive power surges at deep backscatter sensors, in accordance with FCC regulations, to overcome the turn-on voltage barrier. RiCharge is implemented on commodity devices, and the evaluation reveals that RiCharge can use a single RF source to power up backscatter sensors at a distance of 60 m in the air (i.e., 10x longer than a commercial off-the-shelf reader) and at a depth of 50 cm under water (i.e., 2x deeper than the previous record).
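To see why constructive surges help cross a rectifier's turn-on threshold, the following sketch sums a few slightly detuned tones from one source: the average power stays modest while periodic in-phase alignment yields peaks near N^2. The tone count, spacing, and sample rate are hypothetical, not RiCharge's waveform.

```python
# Constructive power surges from one RF source: summing N slightly detuned
# tones keeps the average power near N/2 (relative units), while periodic
# in-phase alignment produces peaks near N^2 -- enough to momentarily exceed
# a turn-on voltage that the average power alone could not reach.
import numpy as np

fs = 1_000_000                              # sample rate (Hz), hypothetical
t = np.arange(0, 0.01, 1 / fs)
n_tones = 8
offsets = np.arange(n_tones) * 1_000.0      # 1 kHz spacing, hypothetical

signal = sum(np.cos(2 * np.pi * (100_000 + df) * t) for df in offsets)
power = signal ** 2

print(f"average power : {power.mean():.1f}")   # ~ N / 2 = 4
print(f"peak power    : {power.max():.1f}")    # ~ N^2  = 64
```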

Physical Layer Key Generation between Backscatter Devices over Ambient RF Signals

Pu Wang (Xidian University, China); Long Jiao and Kai Zeng (George Mason University, USA); Zheng Yan (Xidian University & Aalto University, China)

Ambient backscatter communication (AmBC), which enables energy harvesting and ultra-low-power communication by utilizing ambient radio frequency (RF) signals, has emerged as a cutting-edge technology for realizing numerous Internet of Things (IoT) applications. However, the current literature lacks efficient secret key sharing solutions for resource-limited devices in AmBC systems to protect backscatter communications, especially for private data transmission. Thus, we propose a novel physical layer key generation scheme between backscatter devices (BDs) that exploits the received superposed ambient signals. Based on the repeated patterns (i.e., the cyclic prefix in OFDM symbols) in ambient RF signals, we present a joint transceiver design of the BD backscatter waveform and the BD receiver to extract the downlink signal and the backscatter signal from the superposed signals. By multiplying the downlink signal and the backscatter signal, we obtain the triangle channel information as a shared random secret source for key generation. In addition, we study the trade-off between the secret key generation rate and the harvested energy by modeling it as a joint optimization problem. Finally, extensive numerical simulations evaluate the key generation performance, the energy harvesting performance, and their trade-offs under various system settings.
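A minimal sketch of the final quantization step common to physical-layer key generation: two devices observe the same random channel values plus independent noise, threshold around the median, and obtain mostly matching key bits. The channel model and noise level are assumptions, not the paper's triangle-channel extraction.

```python
# Reciprocity-based key bits from correlated observations (illustrative model).
import numpy as np

rng = np.random.default_rng(1)
n_probes = 256
shared_channel = rng.normal(size=n_probes)                  # common random source
obs_a = shared_channel + 0.1 * rng.normal(size=n_probes)    # BD A's noisy estimate
obs_b = shared_channel + 0.1 * rng.normal(size=n_probes)    # BD B's noisy estimate

bits_a = (obs_a > np.median(obs_a)).astype(int)
bits_b = (obs_b > np.median(obs_b)).astype(int)

kdr = np.mean(bits_a != bits_b)                             # key disagreement rate
print(f"key disagreement rate: {kdr:.2%}")                  # small; fixed by reconciliation
```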

Signal Detection and Classification in Shared Spectrum: A Deep Learning Approach

Wenhan Zhang, Mingjie Feng, Marwan Krunz and Amir Hossein Yazdani Abyaneh (University of Arizona, USA)

Accurate identification of the signal type in shared-spectrum networks is critical for efficient resource allocation and fair coexistence. It can be used for scheduling transmission opportunities to avoid collisions and improve system throughput, especially when the environment changes rapidly. In this paper, we develop deep neural networks (DNNs) to detect coexisting signal types based on In-phase/Quadrature (I/Q) samples without decoding them. Using segments of the received signal's samples as input, a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) are combined and trained using categorical cross-entropy (CE) optimization. Classification results for coexisting Wi-Fi, LTE LAA, and 5G NR-U signals in the 5-6 GHz unlicensed band show the high accuracy of the proposed design. We then exploit spectral analysis of the I/Q sequences to further improve the classification accuracy. By applying the Short-Time Fourier Transform (STFT), additional information in the frequency domain can be presented as a spectrogram; accordingly, we enlarge the input size of the DNN. To verify the effectiveness of the proposed detection framework, we conduct over-the-air (OTA) experiments using USRP radios. The proposed approach achieves accurate classification in both simulations and hardware experiments.
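A hedged sketch of a CNN-plus-RNN classifier over raw I/Q segments trained with categorical cross-entropy, in the spirit of the pipeline described above; the layer sizes, segment length, and class count are illustrative choices, not the paper's architecture.

```python
# CNN + GRU over 2-channel (I, Q) segments, trained with cross-entropy.
import torch
import torch.nn as nn

class IQClassifier(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        # Input: (batch, 2, seg_len) -- I and Q as two channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.rnn = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        z = self.cnn(x)              # (batch, 64, seg_len / 16)
        z = z.transpose(1, 2)        # (batch, time, features) for the GRU
        _, h = self.rnn(z)           # final hidden state summarizes the segment
        return self.head(h[-1])

model = IQClassifier()
iq = torch.randn(8, 2, 1024)         # 8 dummy segments of 1024 I/Q samples
logits = model(iq)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
loss.backward()
print(logits.shape, float(loss))
```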

Session Chair

Wei Gao (University of Pittsburgh)

Session D-4

Human Sensing

Conference: 12:00 PM — 1:30 PM EDT
Local: May 12 Wed, 9:00 AM — 10:30 AM PDT

AWash: Handwashing Assistance for the Elderly With Dementia via Wearables

Yetong Cao, Huijie Chen, Fan Li and Song Yang (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

Hand hygiene has a significant impact on human health, and proper handwashing, which has a crucial effect on reducing bacteria, is its cornerstone. Elderly people with dementia suffer from gradual memory loss and difficulty coordinating the steps of handwashing, so proper assistance should be provided to ensure their hand hygiene adherence. Toward this end, we propose AWash, which leverages only the commodity IMU sensors mounted on most wrist-worn devices (e.g., smartwatches) to characterize hand motions and provide assistance accordingly. To handle the particular interference that dementia patients introduce into IMU sensor readings, we design a number of effective techniques to segment handwashing actions, transform sensory input to the body coordinate system, and extract sensor-body inclination angles. A hybrid neural network model enables AWash to generalize to new users without retraining or adaptation, avoiding the trouble of collecting behavior information for every user. To meet the diverse needs of users with various levels of executive functioning, we use a state machine to make prompt decisions, which supports customized assistance. Extensive experiments on a prototype with eight older participants demonstrate that AWash can increase the user's independence in the execution of handwashing.
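The following is a toy state machine for prompt decisions: it walks through an expected sequence of handwashing steps and issues a prompt whenever the recognized action does not advance the sequence. The step names and prompt policy are hypothetical placeholders, not AWash's customized assistance rules.

```python
# Illustrative prompt-decision state machine over recognized handwashing actions.
EXPECTED_STEPS = ["wet hands", "apply soap", "rub palms", "rub backs",
                  "rub fingers", "rinse", "dry"]

def assist(recognized_actions):
    """Yield prompts given a stream of recognized handwashing actions."""
    step = 0
    for action in recognized_actions:
        if step >= len(EXPECTED_STEPS):
            break
        if action == EXPECTED_STEPS[step]:
            step += 1                                   # correct step, move on
        else:
            yield f"prompt: please {EXPECTED_STEPS[step]}"
    while step < len(EXPECTED_STEPS):                   # remind about missed steps
        yield f"prompt: please {EXPECTED_STEPS[step]}"
        step += 1

for p in assist(["wet hands", "rub palms", "apply soap", "rub palms"]):
    print(p)
```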

vGaze: Implicit Saliency-Aware Calibration for Continuous Gaze Tracking on Mobile Devices

Songzhou Yang, Yuan He and Meng Jin (Tsinghua University, China)

Gaze tracking is a useful human-to-computer interface that plays an increasingly important role in a range of mobile applications. Gaze calibration is an indispensable component of gaze tracking, transforming eye coordinates to screen coordinates. Existing approaches to gaze tracking either have limited accuracy or require the user's cooperation in calibration, which in turn hurts the quality of experience. In this paper we propose vGaze, implicit saliency-aware calibration for continuous gaze tracking on mobile devices. The design of vGaze stems from our insight into the temporally and spatially dependent relation between visual saliency and the user's gaze. vGaze is implemented as lightweight software that identifies video frames with "useful" saliency information, senses the user's head movement, and performs opportunistic calibration using only those "useful" frames. We implement vGaze on a commercial mobile device and evaluate its performance in various scenarios. The results show that vGaze works in real time with video playback applications, with an average gaze tracking error of 1.51 cm.
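A minimal sketch of implicit calibration under simplifying assumptions: fit an affine map from eye-feature coordinates to screen coordinates using only frames whose saliency peak is taken as the gaze target. The synthetic data and the affine model are assumptions for illustration, not vGaze's frame-selection criterion or calibration model.

```python
# Least-squares affine calibration from (eye coordinate, saliency peak) pairs.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration pairs collected from "useful" frames.
eye = rng.uniform(-1, 1, size=(50, 2))
true_A = np.array([[800.0, 20.0], [-15.0, 450.0]])
true_b = np.array([960.0, 540.0])
screen = eye @ true_A.T + true_b + rng.normal(0, 5, size=(50, 2))

# Solve screen ~= [eye, 1] @ theta by least squares (theta stacks A and b).
X = np.hstack([eye, np.ones((50, 1))])
theta, *_ = np.linalg.lstsq(X, screen, rcond=None)

pred = X @ theta
print("mean calibration error (px):", np.linalg.norm(pred - screen, axis=1).mean())
```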

PALMAR: Towards Adaptive Multi-inhabitant Activity Recognition in Point-Cloud Technology

Mohammad Arif Ul Alam, Md Mahmudur Rahman and Jared Widberg (University of Massachusetts Lowell, USA)

With the advancement of deep neural networks and computer vision-based Human Activity Recognition (HAR), point-cloud data (PCD) technologies (LiDAR, mmWave) have attracted considerable interest due to their privacy-preserving nature. Given the high promise of accurate PCD technologies, we develop PALMAR, a multi-inhabitant activity recognition system that employs efficient signal processing and novel machine learning techniques to track individual persons, toward an adaptive multi-inhabitant tracking and HAR system. More specifically, we propose (i) a voxelized feature representation-based real-time PCD fine-tuning method, (ii) efficient clustering (DBSCAN and BIRCH) with Adaptive Order Hidden Markov Model based multi-person tracking and crossover ambiguity reduction techniques, and (iii) a novel adaptive deep learning-based domain adaptation technique to improve the accuracy of HAR in the presence of data scarcity and diversity (device, location, and population diversity). We experimentally evaluate our framework using (i) real-time PCD collected by three devices (3D LiDAR and 79 GHz mmWave) from 6 participants, (ii) a publicly available 3D LiDAR activity dataset (28 participants), and (iii) an embedded hardware prototype, which delivered promising HAR performance in the multi-inhabitant scenario (96%) with a 63% improvement in multi-person tracking over a state-of-the-art framework, without significant loss of system performance on the edge computing device.
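A hedged sketch of the clustering stage only: voxel-round a point cloud and use DBSCAN to group points into per-person candidate clusters. The voxel size, DBSCAN parameters, and synthetic cloud are illustrative assumptions, not PALMAR's tuned values.

```python
# Per-frame person separation via voxelization + DBSCAN (illustrative).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Two hypothetical inhabitants as compact 3-D point clusters plus stray points.
person_a = rng.normal([1.0, 2.0, 1.0], 0.15, size=(120, 3))
person_b = rng.normal([3.5, 1.0, 1.0], 0.15, size=(100, 3))
noise = rng.uniform(0, 5, size=(15, 3))
cloud = np.vstack([person_a, person_b, noise])

voxel = 0.05
cloud = np.round(cloud / voxel) * voxel          # coarse voxelization

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(cloud)
n_people = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found (candidate persons):", n_people)
```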

SmartDistance: A Mobile-based Positioning System for Automatically Monitoring Social Distance

Li Li (Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences, China); Xiaorui Wang (The Ohio State University, USA); Wenli Zheng (Shanghai Jiaotong University, China); Cheng-Zhong Xu (University of Macau, China)

Coronavirus disease 2019 (COVID-19) has resulted in an ongoing pandemic. Since COVID-19 spreads mainly via close contact among people, social distancing has become an effective way to slow down the spread. However, completely forbidding close contact can also lead to unacceptable damage to society. Thus, a system that can effectively monitor people's social distance and generate corresponding alerts when a high infection probability is detected is urgently needed.

In this paper, we propose SmartDistance, a smartphone-based software framework that monitors people's interactions in an effective manner and generates a reminder whenever the infection probability is high. Specifically, SmartDistance dynamically senses both the relative distance and orientation during social interaction with a well-designed relative positioning system. In addition, it recognizes different events (e.g., speaking, coughing) and determines the infection space through a droplet transmission model. With event recognition and relative positioning, SmartDistance effectively detects risky social interaction, generates an alert immediately, and records the relevant data for close-contact reporting. We prototype SmartDistance on different Android smartphones, and the evaluation shows it reduces the false positive rate from 33% to 1% and the false negative rate from 5% to 3% in infection risk detection.
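An illustrative risk check combining relative distance, facing orientation, and the recognized event. The droplet-range table and angle threshold are hypothetical placeholders, not SmartDistance's calibrated transmission model.

```python
# Toy infection-risk rule over (distance, orientation, event) observations.
DROPLET_RANGE_M = {"breathing": 1.0, "speaking": 1.8, "coughing": 3.0}

def risky(distance_m, facing_angle_deg, event):
    """Return True if the peer is inside the event's droplet range and
    roughly facing the user (within +/- 60 degrees)."""
    in_range = distance_m <= DROPLET_RANGE_M.get(event, 1.0)
    facing = abs(facing_angle_deg) <= 60.0
    return in_range and facing

print(risky(1.5, 10, "speaking"))    # True: close and facing while speaking
print(risky(1.5, 120, "speaking"))   # False: not facing the user
print(risky(2.5, 0, "coughing"))     # True: coughing carries farther
```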

Session Chair

Aaron Striegel (University of Notre Dame)

Session D-5

Human Sensing 2

Conference: 2:30 PM — 4:00 PM EDT
Local: May 12 Wed, 11:30 AM — 1:00 PM PDT

CanalScan: Tongue-Jaw Movement Recognition via Ear Canal Deformation Sensing

Yetong Cao, Huijie Chen and Fan Li (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

Human-machine interfaces based on tongue-jaw movements have recently become one of the major technological trends. However, existing schemes have several limitations, such as requiring dedicated hardware and being uncomfortable to wear. This paper presents CanalScan, a nonintrusive system for tongue-jaw movement recognition that uses only the commodity speaker and microphone mounted on ubiquitous off-the-shelf devices (e.g., smartphones). The basic idea is to send an acoustic signal, capture its reflections, and derive the unique patterns of ear canal deformation caused by tongue-jaw movements. A dynamic segmentation method with Support Vector Domain Description is used to segment tongue-jaw movements. To combat sensitivity to sensor position and ear canal shape in multi-path reflections, we first design algorithms that assist users in adjusting the acoustic sensors to the same valid zone, and then propose a data transformation mechanism that reduces the impact of diversity in ear canal shapes and in the relative positions between the sensors and the ear canal. CanalScan explores twelve unique and consistent features and applies a Random Forest classifier to distinguish tongue-jaw movements. Extensive experiments with twenty participants demonstrate that CanalScan achieves promising recognition of six tongue-jaw movements, is robust across various usage scenarios, and generalizes to new users without retraining or adaptation.
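A hedged sketch of the classification stage only: a Random Forest over per-segment feature vectors. The synthetic Gaussian features stand in for CanalScan's twelve hand-crafted features and are purely illustrative.

```python
# Random Forest over synthetic per-segment feature vectors (6 movement classes).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_movements, n_features = 6, 12

# Synthetic data: one Gaussian blob of feature vectors per tongue-jaw movement.
X = np.vstack([rng.normal(loc=m, scale=0.8, size=(80, n_features))
               for m in range(n_movements)])
y = np.repeat(np.arange(n_movements), 80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy on synthetic features: {clf.score(X_te, y_te):.2f}")
```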

HearFit: Fitness Monitoring on Smart Speakers via Active Acoustic Sensing

Yadong Xie, Fan Li and Yue Wu (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

Fitness can help strengthen muscles, increase resistance to disease, and improve body shape. Nowadays, more and more people tend to exercise at home or in the office, since they lack the time to go to a dedicated gym, but it is difficult for most of them to achieve good results due to the lack of professional guidance. Motivated by this, we propose HearFit, the first non-invasive fitness monitoring system based on commercial smart speakers for home/office environments. To achieve this, we turn smart speakers into active sonars. We design a fitness detection method based on Doppler shift and adopt short-time energy to segment fitness actions. We design a high-accuracy LSTM network to determine the type of fitness action; combined with incremental learning, users can easily add new actions. Finally, we evaluate the local (i.e., intensity and duration) and global (i.e., continuity and smoothness) fitness quality of users to help improve the fitness effect and prevent injury. Through extensive experiments covering over 7,000 actions of 10 types of fitness, with and without dumbbells, from 12 participants, HearFit detects fitness actions with an average accuracy of 96.13% and gives accurate statistics in various environments.
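A minimal sketch of short-time-energy segmentation: frame a signal, compute per-frame energy, and flag frames above an adaptive threshold as parts of an action. The frame length, threshold rule, and synthetic signal are illustrative assumptions, not HearFit's parameters.

```python
# Short-time energy segmentation over a synthetic demodulated signal.
import numpy as np

def short_time_energy(signal, frame_len=256, hop=128):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f ** 2) for f in frames])

rng = np.random.default_rng(5)
quiet = 0.05 * rng.normal(size=4000)
action = np.sin(2 * np.pi * 0.01 * np.arange(4000)) + 0.05 * rng.normal(size=4000)
signal = np.concatenate([quiet, action, quiet])

energy = short_time_energy(signal)
active = energy > 3 * np.median(energy)       # simple adaptive threshold
print("frames flagged as action:", int(active.sum()), "of", len(active))
```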

RespTracker: Multi-user Room-scale Respiration Tracking with Commercial Acoustic Devices

Haoran Wan, Shuyu Shi, Wenyu Cao, Wei Wang and Guihai Chen (Nanjing University, China)

Continuous domestic respiration monitoring provides vital information for diagnosing assorted diseases. In this paper, we introduce RESPTRACKER, the first continuous, multi-person respiration tracking system for domestic settings using commercial off-the-shelf (COTS) acoustic devices. RESPTRACKER uses a two-stage algorithm to separate and recombine respiration signals from multiple paths over a short period, so that it can track the respiration rates of multiple moving subjects. Our experimental results show that the two-stage algorithm can distinguish the respiration of at least four subjects at a distance of three meters.
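As a single-person baseline (well short of RESPTRACKER's multi-person, multi-path separation), the sketch below estimates a respiration rate by locating the spectral peak of a displacement-like signal inside the typical 0.1-0.5 Hz breathing band; the sampling rate and signal model are assumptions.

```python
# Respiration-rate estimation via a spectral peak in the breathing band.
import numpy as np

fs = 20.0                                   # samples per second, hypothetical
t = np.arange(0, 60, 1 / fs)                # one minute of data
breathing = np.sin(2 * np.pi * 0.25 * t)    # 0.25 Hz = 15 breaths/min
signal = breathing + 0.3 * np.random.default_rng(6).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.5)
rate_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated respiration rate: {rate_hz * 60:.1f} breaths/min")
```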

Mobile Crowdsensing for Data Freshness: A Deep Reinforcement Learning Approach

Zipeng Dai, Hao Wang, Chi Harold Liu and Rui Han (Beijing Institute of Technology, China); Jian Tang (Syracuse University, USA); Guoren Wang (Northeastern University, China)

Data collected by mobile crowdsensing (MCS) is emerging as a data source for smart city applications; however, how to ensure data freshness has received little research attention, despite being quite important in practice. In this paper, we consider using a group of mobile agents (MAs), such as UAVs and driverless cars equipped with multiple antennas, that move around the task area to collect data from deployed sensor nodes (SNs). Our goal is to minimize the age of information (AoI) of all SNs and the energy consumption of MAs during movement and data upload. To this end, we propose a centralized deep reinforcement learning (DRL)-based solution called "DRL-freshMCS" for controlling MA trajectory planning and SN scheduling. We further utilize implicit quantile networks to maintain accurate value estimates and steady policies for MAs. Then, we design an exploration and exploitation mechanism based on dynamic distributed prioritized experience replay. We also derive a theoretical lower bound for episodic AoI. Extensive simulation results show that DRL-freshMCS significantly reduces the episodic AoI per unit of remaining energy, compared to five baselines, when varying the number of antennas, data upload thresholds, and number of SNs. We also visualize the trajectories and the AoI update process for clear illustration.
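A minimal sketch of the age-of-information bookkeeping such a reward is built on: every SN's age grows by one each slot and resets to zero when an agent collects its data. The random visit schedule below is a stand-in, not a learned DRL-freshMCS policy.

```python
# Episodic AoI accounting under a random (non-learned) visiting policy.
import numpy as np

rng = np.random.default_rng(7)
n_sns, n_slots = 5, 50
age = np.zeros(n_sns)
episodic_aoi = 0.0

for _ in range(n_slots):
    age += 1                                   # ages grow every slot
    visited = rng.integers(0, n_sns)           # stand-in for the MA's decision
    age[visited] = 0                           # fresh data collected from that SN
    episodic_aoi += age.sum()

print(f"episodic AoI (sum over slots and SNs): {episodic_aoi:.0f}")
```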

Session Chair

Ming Li (University of Texas Arlington)

Session D-6

Crowdsourcing

Conference: 4:30 PM — 6:00 PM EDT
Local: May 12 Wed, 1:30 PM — 3:00 PM PDT

Crowdsourcing System for Numerical Tasks based on Latent Topic Aware Worker Reliability

Zhuan Shi, Shanyang Jiang and Lan Zhang (University of Science and Technology of China, China); Yang Du (Soochow University, China); Xiang-Yang Li (University of Science and Technology of China, China)

Crowdsourcing is a widely adopted approach for various labor-intensive tasks. One of the core problems in crowdsourcing systems is how to assign tasks to the most suitable workers for better results, which heavily relies on accurately profiling each worker's reliability for different topics of tasks. Much previous work has studied worker reliability either for explicit topics represented by task descriptions or for latent topics in categorical tasks. In this work, we aim to accurately estimate more fine-grained worker reliability for latent topics in numerical tasks, so as to further improve result quality. We propose a Bayesian probabilistic model named the Gaussian Latent Topic Model (GLTM) to mine the latent topics of numerical tasks based on workers' behaviors and to estimate workers' topic-level reliability. Building on the GLTM, we propose a truth inference algorithm named TI-GLTM that accurately infers the tasks' truth and topics simultaneously and dynamically updates workers' topic-level reliability. We also design an online task assignment mechanism called MRA-GLTM, which assigns appropriate tasks to workers under the Maximum Reduced Ambiguity principle. Experimental results show that our algorithms achieve significantly lower MAE and MSE than state-of-the-art approaches.
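For intuition, here is a hedged sketch of the classic iterative truth-inference scheme for numerical tasks that topic-level models refine: alternate between estimating each task's truth as a reliability-weighted average and re-estimating each worker's reliability from the inverse variance of their residuals. This is the generic baseline, not GLTM or TI-GLTM, and all quantities are synthetic.

```python
# Iterative weighted-average truth inference with inverse-variance reliability.
import numpy as np

rng = np.random.default_rng(8)
n_workers, n_tasks = 10, 40
truth = rng.uniform(0, 100, n_tasks)
noise_std = rng.uniform(1, 15, n_workers)            # unknown per-worker quality
answers = truth + rng.normal(size=(n_workers, n_tasks)) * noise_std[:, None]

reliability = np.ones(n_workers)
for _ in range(20):
    w = reliability / reliability.sum()
    estimate = w @ answers                            # weighted-average truth
    resid_var = ((answers - estimate) ** 2).mean(axis=1)
    reliability = 1.0 / (resid_var + 1e-9)            # inverse-variance weights

print("MAE of inferred truth:", round(float(np.abs(estimate - truth).mean()), 2))
```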

Strategic Information Revelation in Crowdsourcing Systems Without Verification

Chao Huang (The Chinese University of Hong Kong, Hong Kong); Haoran Yu (Beijing Institute of Technology, China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China); Randall A Berry (Northwestern University, USA)

We study a crowdsourcing problem where the platform aims to incentivize distributed workers to provide high-quality and truthful solutions without the ability to verify them. While most prior work assumes that the platform and workers have symmetric information, we study an asymmetric information scenario in which the platform has an informational advantage. Specifically, the platform knows more about workers' average solution accuracy and can strategically reveal such information to workers, who use the announcement to determine the likelihood of obtaining a reward if they exert effort on the task. We study two types of workers: (1) naive workers, who fully trust the announcement, and (2) strategic workers, who update their prior beliefs based on the announcement. For naive workers, we show that the platform should always announce a high average accuracy to maximize its payoff. However, this is not always optimal for strategic workers, as it may reduce the credibility of the platform's announcement and hence the platform's payoff. Interestingly, when facing strategic workers the platform may even have an incentive to announce an average accuracy lower than the actual value. Another counter-intuitive result is that the platform's payoff may decrease in the number of high-accuracy workers.

Minimizing Entropy for Crowdsourcing with Combinatorial Multi-Armed Bandit

Yiwen Song and Haiming Jin (Shanghai Jiao Tong University, China)

Nowadays, crowdsourcing has become an increasingly popular paradigm for large-scale data collection, annotation, and classification. The rapid growth of crowdsourcing platforms calls for effective worker selection mechanisms, which oftentimes have to operate with a priori unknown worker reliability. We observe that the empirical entropy of workers' results, which measures the uncertainty in the final aggregated results, is a natural metric for evaluating the outcome of crowdsourcing tasks. Therefore, this paper designs a worker selection mechanism that minimizes the empirical entropy of the results submitted by participating workers. Specifically, we formulate worker selection under sequentially arriving tasks as a combinatorial multi-armed bandit problem, which treats each worker as an arm and aims to learn the combination of arms that minimizes the cumulative empirical entropy. Using information-theoretic methods, we carefully derive an estimate of the upper confidence bound for empirical entropy minimization and leverage it in our minimum entropy upper confidence bound (ME-UCB) algorithm to balance exploration and exploitation. Theoretically, we prove that ME-UCB has a regret upper bound of O(1), which surpasses existing submodular UCB algorithms. Extensive experiments on both synthetic and real-world datasets empirically demonstrate that ME-UCB outperforms other state-of-the-art approaches.
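Two illustrative building blocks, not the paper's confidence-bound derivation: the empirical entropy of the labels submitted for a task (the objective ME-UCB minimizes) and a generic UCB-style exploration bonus that shrinks as a worker is selected more often.

```python
# Empirical entropy of submitted labels + a generic UCB exploration bonus.
import numpy as np

def empirical_entropy(labels):
    """Empirical entropy (nats) of the label distribution submitted for a task."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def exploration_bonus(n_pulls, t):
    """Generic UCB bonus: larger for rarely selected workers at round t."""
    return np.sqrt(2.0 * np.log(t) / np.maximum(n_pulls, 1))

print(empirical_entropy(["A", "A", "A", "B"]))   # low: workers mostly agree
print(empirical_entropy(["A", "B", "C", "D"]))   # high: maximal disagreement
print(exploration_bonus(np.array([1, 10, 100]), t=200))
```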

Distributed Neighbor Distribution Estimation with Adaptive Compressive Sensing in VANETs

Yunxiang Cai, Hongzi Zhu and Xiao Wang (Shanghai Jiao Tong University, China); Shan Chang (Donghua University, China); Jiangang Shen and Minyi Guo (Shanghai Jiao Tong University, China)

Acquiring the geographical distribution of neighbors can support more adaptive medium access control (MAC) protocols and other safety applications in vehicular ad hoc networks (VANETs). However, it is very challenging for each vehicle to estimate its own neighbor distribution in a fully distributed setting. In this paper, we propose an online distributed neighbor distribution estimation scheme, called PeerProbe, in which vehicles collaborate with each other to probe their own neighborhoods via simultaneous symbol-level wireless communication. An adaptive compressive sensing algorithm is developed to recover a neighbor distribution from a small number of random probes with non-negligible noise, and the required number of probes adapts to the sparseness of the distribution. We conduct extensive simulations, and the results demonstrate that PeerProbe is lightweight and can accurately recover highly dynamic neighbor distributions under critical channel conditions.
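A hedged sketch of the compressive-sensing step: recover a sparse neighbor distribution x from a few noisy random probes y = A x using orthogonal matching pursuit, a standard recovery algorithm. PeerProbe's adaptive probe count and symbol-level probing are not modeled; the dimensions and measurement matrix are assumptions.

```python
# Sparse neighbor-distribution recovery via orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(9)
n_bins, n_probes, sparsity = 64, 20, 4

x_true = np.zeros(n_bins)                         # neighbors occupy few spatial bins
x_true[rng.choice(n_bins, sparsity, replace=False)] = rng.uniform(1, 3, sparsity)

A = rng.normal(size=(n_probes, n_bins)) / np.sqrt(n_probes)
y = A @ x_true + 0.01 * rng.normal(size=n_probes)

# OMP: greedily pick the column most correlated with the residual, refit, repeat.
support, residual = [], y.copy()
for _ in range(sparsity):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n_bins)
x_hat[support] = coef
print("recovered support:", sorted(support), "true:", sorted(np.flatnonzero(x_true)))
```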

Session Chair

Qinghua Li (University of Arkansas)
